Proactive Hearing Assistants that Isolate Egocentric Conversations

Hu, Guilin, Itani, Malek, Chen, Tuochao, Gollakota, Shyamnath

arXiv.org Artificial Intelligence

We introduce proactive hearing assistants that automatically identify and separate the wearer's conversation partners, without requiring explicit prompts. Our system operates on egocentric binaural audio and uses the wearer's self-speech as an anchor, leveraging turn-taking behavior and dialogue dynamics to infer conversational partners and suppress others. To enable real-time, on-device operation, we propose a dual-model architecture: a lightweight streaming model runs every 12.5 ms for low-latency extraction of the conversation partners, while a slower model runs less frequently to capture longer-range conversational dynamics. Results on real-world 2- and 3-speaker conversation test sets, collected with binaural egocentric hardware from 11 participants totaling 6.8 hours, show generalization in identifying and isolating conversational partners in multi-conversation settings. Our work marks a step toward hearing assistants that adapt proactively to conversational dynamics and engagement. More information can be found on our website: https://proactivehearing.cs.washington.edu/


Towards Multimodal Social Conversations with Robots: Using Vision-Language Models

Janssens, Ruben, Belpaeme, Tony

arXiv.org Artificial Intelligence

Large language models have given social robots the ability to autonomously engage in open-domain conversations. However, they are still missing a fundamental social skill: making use of the multiple modalities that carry social interactions. While previous work has focused on task-oriented interactions that require referencing the environment or specific phenomena in social interactions such as dialogue breakdowns, we outline the overall needs of a multimodal system for social conversations with robots. We then argue that vision-language models are able to process this wide range of visual information in a sufficiently general manner for autonomous social robots. We describe how to adapt them to this setting, which technical challenges remain, and briefly discuss evaluation practices.


Measuring Dependencies between Biological Signals with Self-supervision, and its Limitations

Sariyanidi, Evangelos, Herrington, John D., Yankowitz, Lisa, Chaudhari, Pratik, Satterthwaite, Theodore D., Zampella, Casey J., Schultz, Robert T., Shinohara, Russell T., Tunc, Birkan

arXiv.org Artificial Intelligence

Measuring the statistical dependence between observed signals is a primary tool for scientific discovery. However, biological systems often exhibit complex non-linear interactions that currently cannot be captured without a priori knowledge regarding the nature of dependence. We introduce a self-supervised approach, concurrence, which is inspired by the observation that if two signals are dependent, then one should be able to distinguish between temporally aligned vs. misaligned segments extracted from them. Experiments with fMRI, physiological and behavioral signals show that, to our knowledge, concurrence is the first approach that can expose relationships across such a wide spectrum of signals and extract scientifically relevant differences without ad-hoc parameter tuning or reliance on a priori information, providing a potent tool for scientific discoveries across fields. However, dependencies caused by extraneous factors remain an open problem, thus researchers should validate that exposed relationships truly pertain to the question(s) of interest.
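The core observation behind concurrence can be illustrated with a toy stand-in: if two signals are dependent, temporally aligned segment pairs should be easier to tell apart from misaligned ones. The sketch below uses plain segment correlation in place of the paper's learned self-supervised discriminator, so it is only a minimal illustration of the aligned-vs-misaligned idea, not the actual method; all function and variable names are hypothetical.

```python
import numpy as np

def alignment_scores(x, y, seg_len=50, n_pairs=200, rng=None):
    """Mean |correlation| of aligned vs. misaligned segment pairs.

    A toy proxy for the learned discriminator in the concurrence
    approach: for dependent signals, aligned segments should score
    visibly higher than randomly offset (misaligned) ones.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(x) - seg_len
    aligned, misaligned = [], []
    for _ in range(n_pairs):
        i = rng.integers(0, n)
        j = rng.integers(0, n)  # independent offset -> misaligned pair
        aligned.append(abs(np.corrcoef(x[i:i + seg_len], y[i:i + seg_len])[0, 1]))
        misaligned.append(abs(np.corrcoef(x[i:i + seg_len], y[j:j + seg_len])[0, 1]))
    return float(np.mean(aligned)), float(np.mean(misaligned))

# Synthetic dependent pair: y is a noisy nonlinear function of x.
rng = np.random.default_rng(1)
x = np.convolve(rng.standard_normal(2000), np.ones(5) / 5, mode="same")
y = np.tanh(x) + 0.1 * rng.standard_normal(2000)
a, m = alignment_scores(x, y)  # aligned score clearly exceeds misaligned
```

In the paper the discriminator is trained rather than hand-crafted, which is what lets it expose dependencies that a fixed statistic like correlation would miss.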


Controlling Difficulty of Generated Text for AI-Assisted Language Learning

Jin, Meiqing, Dugan, Liam, Callison-Burch, Chris

arXiv.org Artificial Intelligence

Practicing conversations with large language models (LLMs) presents a promising alternative to traditional in-person language learning. However, most LLMs generate text at a near-native level of complexity, making them ill-suited for beginner learners (CEFR: A1-A2). In this paper, we investigate whether controllable generation techniques -- specifically modular methods that do not require model fine-tuning -- can adapt LLM outputs to better support absolute beginners. We evaluate these methods through both automatic metrics and a user study with university-level learners of Japanese. Our findings show that while prompting alone fails to control output difficulty, the use of future discriminators (Yang and Klein, 2021) significantly improves output comprehensibility (from 40.4% to 84.3%). We further introduce a novel token-level evaluation metric, Token Miss Rate (TMR), that quantifies the proportion of incomprehensible tokens per utterance and correlates strongly with human judgments. To support future research in AI-assisted language learning, we release our code, models, annotation tools, and dataset.
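As described, TMR is the proportion of incomprehensible tokens per utterance. A minimal sketch follows, assuming a vocabulary-list membership test as the proxy for "incomprehensible"; the paper's actual comprehensibility judgments come from annotation, so this is an illustration of the ratio only.

```python
def token_miss_rate(tokens, known_vocab):
    """Token Miss Rate: fraction of tokens in an utterance that fall
    outside the learner's assumed known vocabulary (a proxy for
    incomprehensible tokens). Returns 0.0 for an empty utterance."""
    if not tokens:
        return 0.0
    misses = sum(1 for t in tokens if t not in known_vocab)
    return misses / len(tokens)

# One unknown token out of five -> TMR of 0.2.
tmr = token_miss_rate(["I", "eat", "sushi", "every", "day"],
                      {"I", "eat", "every", "day"})
```

A lower TMR on generated utterances would indicate output better matched to a beginner's vocabulary.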


AAC with Automated Vocabulary from Photographs: Insights from School and Speech-Language Therapy Settings

Communications of the ACM

Traditional symbol-based AAC devices impose meta-linguistic and memory demands on individuals with complex communication needs and hinder conversation partners from stimulating symbolic language in meaningful moments. This work presents a prototype application that generates situation-specific communication boards formed by a combination of descriptive, narrative, and semantic related words and phrases inferred automatically from photographs. Through semi-structured interviews with AAC professionals, we investigate how this prototype was used to support communication and language learning in naturalistic school and therapy settings. We find that the immediacy of vocabulary reduces conversation partners' workload, opens up opportunities for AAC stimulation, and facilitates symbolic understanding and sentence construction. We contribute a nuanced understanding of how vocabularies generated automatically from photographs can support individuals with complex communication needs in using and learning symbolic AAC, offering insights into the design of automatic vocabulary generation methods and interfaces to better support various scenarios of use and goals.


Generative AI, Pragmatics, and Authenticity in Second Language Learning

Godwin-Jones, Robert

arXiv.org Artificial Intelligence

There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching. Those include using AI as a language tutor, creating learning materials, or assessing learner output. However, because AI systems understand human language through a mathematical model based on statistical probability, they lack the lived experience needed to use language with the same social awareness as humans. Additionally, they carry built-in linguistic and cultural biases from their training data, which is mostly in English and predominantly from Western sources. These facts limit the suitability of AI for some language learning interactions. Studies have clearly shown that systems such as ChatGPT often do not produce language that is pragmatically appropriate. The lack of linguistic and cultural authenticity has important implications for how AI is integrated into second language acquisition as well as into instruction targeting the development of intercultural communication competence.


AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure

Chen, Xi, Zhang, Zhiyang, Yang, Fangkai, Qin, Xiaoting, Du, Chao, Cheng, Xi, Liu, Hangxin, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, Zhang, Qi

arXiv.org Artificial Intelligence

Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite their advantages, concerns arise regarding the potential risk of privacy leaks, particularly in scenarios involving social interactions. While existing research has focused on protecting privacy by limiting the access of AI delegates to sensitive user information, many social scenarios require disclosing private details to achieve desired outcomes, necessitating a balance between privacy protection and disclosure. To address this challenge, we conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios, and then propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.


Mixed-Session Conversation with Egocentric Memory

Jang, Jihyoung, Kim, Taeyoung, Kim, Hyounghun

arXiv.org Artificial Intelligence

Recently introduced dialogue systems have demonstrated high usability. However, they still fall short of reflecting real-world conversation scenarios: current dialogue systems cannot replicate the dynamic, continuous, long-term interactions that involve multiple partners. This shortfall arises because limited effort has gone into accounting for both aspects of real-world dialogues: deeply layered interactions over long-term dialogue and widely expanded conversation networks involving multiple participants. To incorporate both aspects, we introduce Mixed-Session Conversation, a dialogue system designed to construct conversations with various partners in a multi-session dialogue setup. We propose a new dataset called MiSC to implement this system. The dialogue episodes of MiSC consist of 6 consecutive sessions, with four speakers (one main speaker and three partners) appearing in each episode. We also propose a new dialogue model with a novel memory management mechanism, called Egocentric Memory Enhanced Mixed-Session Conversation Agent (EMMA). EMMA collects and retains memories from the main speaker's perspective during conversations with partners, enabling seamless continuity in subsequent interactions. Extensive human evaluations validate that the dialogues in MiSC exhibit a seamless conversational flow, even when conversation partners change in each session. EMMA, trained with MiSC, is also shown to maintain high memorability without contradiction throughout the entire conversation.


Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation

Ma, Cheng Charles, Joo, Kevin Hyekang, Vail, Alexandria K., Bhattacharya, Sunreeta, García, Álvaro Fernández, Baker-Matsuoka, Kailana, Mathew, Sheryl, Holt, Lori L., De la Torre, Fernando

arXiv.org Artificial Intelligence

Over the past decade, wearable computing devices ("smart glasses") have undergone remarkable advancements in sensor technology, design, and processing power, ushering in a new era of opportunity for high-density human behavior data. Equipped with wearable cameras, these glasses offer a unique opportunity to analyze non-verbal behavior in natural settings as individuals interact. Our focus lies in predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion. Leveraging such analyses may revolutionize our understanding of human communication, foster more effective collaboration in professional environments, provide better mental health support through empathetic virtual interactions, and enhance accessibility for those with communication barriers. In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation. We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript" that can be processed by an LLM for behavioral reasoning tasks. Remarkably, this method achieves performance comparable to established fusion techniques even in its preliminary implementation, indicating strong potential for further research and optimization. This fusion method is one of the first to approach "reasoning" about real-world human behavior through a language model. Smart glasses provide us the ability to unobtrusively gather high-density multimodal data on human behavior, paving the way for new approaches to understanding and improving human communication with the potential for important societal benefits. The features and data collected during the studies will be made publicly available to promote further research.
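The "multimodal transcript" idea amounts to serializing timestamped events from several behavior streams into one chronologically ordered text that an LLM can read. The sketch below is a minimal illustration under that assumption; the event tuples, modality names, and formatting are hypothetical, not the paper's actual pipeline.

```python
def multimodal_transcript(events):
    """Merge timestamped events from several modalities into one
    chronologically ordered text transcript suitable for an LLM prompt.

    `events` is an iterable of (time_seconds, modality, description).
    """
    lines = []
    for t, modality, desc in sorted(events):  # sort by timestamp first
        lines.append(f"[{t:6.1f}s] [{modality}] {desc}")
    return "\n".join(lines)

# Hypothetical events from speech and gaze streams, given out of order.
events = [
    (5.0, "gaze", "listener looks away briefly"),
    (3.2, "speech", "A: how was your week?"),
    (6.4, "speech", "B: pretty busy, actually"),
]
transcript = multimodal_transcript(events)
```

An engagement-reasoning prompt would then wrap this transcript with an instruction such as "rate the listener's engagement", leaving the fusion itself to the language model.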


ChatGPT passes the famous 'Turing test' - suggesting the AI bot has intelligence equivalent to a human, scientists claim

Daily Mail - Science & tech

Since it was first proposed in 1950, passing the 'Turing test' has been seen as one of the highest goals in AI. But now, researchers claim that ChatGPT has become the first AI to pass this famous test for human intelligence. Proposed by computer pioneer Alan Turing, the test holds that an AI should be considered truly intelligent if people can't tell whether they are speaking to a human or a machine. In a pre-print paper, cognitive scientists from UC San Diego argue that ChatGPT-4 can fool human test subjects more than half of the time. However, the researchers say this might say more about the Turing test than it does about the intelligence of modern AI.